Direct S3 Data Access with GDAL Virtual Raster Format (VRT)


Timing:

  • Exercise: 20 minutes

Summary

In this exercise we use temporary S3 credentials to access Harmonized Landsat Sentinel-2 (HLS) cloud optimized GeoTIFFs directly in NASA Earthdata Cloud. We stack the files into GDAL virtual raster (VRT) files, read them as a Dask-backed xarray time series, clip to a field boundary, compute and quality-filter NDVI, and visualize the results.


Exercise

Import Required Packages

%matplotlib inline
import matplotlib.pyplot as plt
from datetime import datetime
import os
import subprocess
import requests
import boto3
from pystac_client import Client
from collections import defaultdict
import numpy as np
import xarray as xr
import rasterio as rio
from rasterio.session import AWSSession
from rasterio.plot import show
import rioxarray
import geopandas
import pyproj
from pyproj import Proj
from shapely.ops import transform
import geoviews as gv
from cartopy import crs
import hvplot.xarray
import holoviews as hv
gv.extension('bokeh', 'matplotlib')

Get Temporary Credentials and Configure Local Environment

To perform direct S3 data access, one needs to acquire temporary S3 credentials. The credentials give users direct access to S3 buckets in NASA Earthdata Cloud. AWS credentials should not be shared, so take precautions when using them in notebooks or scripts. Note that these temporary credentials are valid for only 1 hour. For more information regarding the temporary credentials, visit https://data.lpdaac.earthdatacloud.nasa.gov/s3credentialsREADME.

def get_temp_creds():
    temp_creds_url = 'https://data.lpdaac.earthdatacloud.nasa.gov/s3credentials'
    return requests.get(temp_creds_url).json()
temp_creds_req = get_temp_creds()
#temp_creds_req                      # !!! BEWARE, removing the # on this line will print your temporary S3 credentials.

Insert the credentials into our boto3 session and configure our rasterio environment for data access

Create a boto3 Session object using your temporary credentials. This Session can then be used to pass those credentials and get S3 objects from applicable buckets.

session = boto3.Session(aws_access_key_id=temp_creds_req['accessKeyId'], 
                        aws_secret_access_key=temp_creds_req['secretAccessKey'],
                        aws_session_token=temp_creds_req['sessionToken'],
                        region_name='us-west-2')

For this exercise, we are going to open up a context manager for the notebook using the rasterio.env module to store the required GDAL and AWS configurations we need to access the data in Earthdata Cloud. While the context manager is open (rio_env.__enter__()), we will be able to run the open and read commands that would typically be executed within a with statement, allowing us to interact with the data more freely. We’ll close the context (rio_env.__exit__()) at the end of the notebook.

GDAL environment variables must be configured to access Earthdata Cloud data assets. Geospatial data access Python packages like rasterio and rioxarray depend on GDAL, leveraging GDAL’s “Virtual File Systems” to read remote files. GDAL has many environment variables that control its behavior. Changing these settings can mean the difference between being able to access a file or not, and they can also have an impact on performance.

rio_env = rio.Env(AWSSession(session),
                  GDAL_DISABLE_READDIR_ON_OPEN='TRUE',
                  GDAL_HTTP_COOKIEFILE=os.path.expanduser('~/cookies.txt'),
                  GDAL_HTTP_COOKIEJAR=os.path.expanduser('~/cookies.txt'))
rio_env.__enter__()
<rasterio.env.Env at 0x7fdb42409c10>

Read in geoJSON for subsetting

We will use the input geoJSON file to clip the source data to our desired region of interest.

field = geopandas.read_file('./data/ne_w_agfields.geojson')
fieldShape = field['geometry'][0]  
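The next cell assumes red_s3_links, a list of S3 links to the HLS red band COGs found during an earlier search step, and later cells assume links_vsi, the corresponding /vsis3/ paths saved to ./data/S3_T13TGF_RED_VSI_Links.txt. If those objects are not still in memory, a minimal sketch to rebuild them from the saved link-list file could look like this (the /vsis3/ to s3:// conversion is an assumption about how that file was written):

# Assumption: an earlier search step wrote the HLS red band links to this file,
# one /vsis3/ path per line (the same file passed to gdalbuildvrt below).
with open('./data/S3_T13TGF_RED_VSI_Links.txt') as f:
    links_vsi = [line.strip() for line in f if line.strip()]

# Convert the /vsis3/ paths to s3:// URLs that rasterio can open through our boto3 session.
red_s3_links = [link.replace('/vsis3/', 's3://') for link in links_vsi]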

To clip the source data to our input feature boundary, we need to transform the feature boundary from its original WGS84 coordinate reference system to the projected reference system of the source HLS file (i.e., UTM Zone 13).

foa_url = red_s3_links[0]
with rio.open(foa_url) as src:
    hls_proj = src.crs.to_string()

hls_proj    
'EPSG:32613'

Transform geoJSON feature from WGS84 to UTM

geo_CRS = Proj('+proj=longlat +datum=WGS84 +no_defs', preserve_units=True)   # Source coordinate system of the ROI
project = pyproj.Transformer.from_proj(geo_CRS, hls_proj)                    # Set up the transformation
fsUTM = transform(project.transform, fieldShape)
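As a quick sanity check (an addition, not part of the original notebook), the transformed geometry’s bounds should now be in UTM metres and fall within the HLS tile extent seen in the arrays below:

print(fsUTM.bounds)   # (minx, miny, maxx, maxy) in UTM Zone 13N metres; expect x near 7.8e5 and y near 4.55e6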

Direct S3 Data Access

Start up a dask client

#from dask.distributed import Client
#client = Client(n_workers=2)
#client

There are multiple ways to read COG data in as a time series. The subprocess package is used in this example to run GDAL’s build virtual raster (gdalbuildvrt) executable outside our Python session. First we’ll construct a string object with the command and its parameters (including our temporary credentials). Then, we run the command using the subprocess.call() function.

Build GDAL VRT Files

Construct the GDAL VRT call
build_red_vrt = f"gdalbuildvrt ./data/red_stack.vrt -separate -input_file_list ./data/S3_T13TGF_RED_VSI_Links.txt --config AWS_ACCESS_KEY_ID {temp_creds_req['accessKeyId']} --config AWS_SECRET_ACCESS_KEY {temp_creds_req['secretAccessKey']} --config AWS_SESSION_TOKEN {temp_creds_req['sessionToken']} --config GDAL_DISABLE_READDIR_ON_OPEN TRUE"
#build_red_vrt    # !!! BEWARE, removing the # on this line will print your temporary S3 credentials.

We now have a fully configured gdalbuildvrt string that we can pass to Python’s subprocess module to run the gdalbuildvrt executable outside our Python environment.
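A minimal sketch of that call, assuming gdalbuildvrt is available on the PATH, is shown below.

subprocess.call(build_red_vrt, shell=True)   # run gdalbuildvrt outside the Python session; returns 0 on success

# The nir_stack.vrt and fmask_stack.vrt files read later are assumed to be built
# the same way from their own link-list files for the NIR and Fmask bands.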

Reading in an HLS time series

We can now read the VRT files into our Python session. A drawback of reading VRTs into Python is that the time coordinate variable needs to be constructed. Below we not only read in the VRT file using rioxarray, but we also repurpose the automatically generated band variable to hold our time information.

Read the RED VRT in as xarray with Dask backing

%%time

chunks=dict(band=1, x=1024, y=1024)
#chunks=dict(band=1, x=512, y=512)
red = rioxarray.open_rasterio('./data/red_stack.vrt', chunks=chunks)                    # Read in VRT
red = red.rename({'band':'time'})                                                       # Rename the 'band' coordinate variable to 'time' 
red['time'] = [datetime.strptime(x.split('.')[-5], '%Y%jT%H%M%S') for x in links_vsi]   # Extract the time information from the input file names and assign them to the time coordinate variable
red = red.sortby('time')                                                                # Sort by the time coordinate variable
red
CPU times: user 219 ms, sys: 20.3 ms, total: 239 ms
Wall time: 246 ms
<xarray.DataArray (time: 19, y: 3660, x: 3660)>
dask.array<getitem, shape=(19, 3660, 3660), dtype=int16, chunksize=(1, 1024, 1024), chunktype=numpy.ndarray>
Coordinates:
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
  * x            (x) float64 7e+05 7e+05 7e+05 ... 8.097e+05 8.097e+05 8.097e+05
  * y            (y) float64 4.6e+06 4.6e+06 4.6e+06 ... 4.49e+06 4.49e+06
    spatial_ref  int64 0
Attributes:
    _FillValue:    -9999.0
    scale_factor:  0.0001
    add_offset:    0.0

Above we use the chunks parameter in the rioxarray.open_rasterio() function to enable the Dask backing. This allows lazy reading of the data, which means the data is not actually read into memory at this point. What we have is an object with some metadata and a pointer to the source data. The data will be streamed to us when we call for it, but not stored in memory until we call the Dask compute() or persist() methods.
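As a quick check (a sketch, not part of the original workflow), we can confirm that the object is Dask-backed and see how much data a full load would pull over the network:

print(type(red.data))                                  # dask.array.core.Array -- a lazy array, nothing read yet
print(f'{red.nbytes / 1e9:.2f} GB if fully loaded')    # size of the complete 19 x 3660 x 3660 int16 stack

# red.compute() or red.persist() would trigger the actual S3 reads; we defer that
# until after clipping to the region of interest below.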

Clip out the ROI and persist the result in memory

Up until now, we haven’t read any of the HLS data into memory. Now we will clip the stack to the field boundary and use the persist() method to load the result into memory.

red_clip = red.rio.clip([fsUTM]).persist()
red_clip
<xarray.DataArray (time: 19, y: 56, x: 56)>
dask.array<astype, shape=(19, 56, 56), dtype=int16, chunksize=(1, 56, 56), chunktype=numpy.ndarray>
Coordinates:
  * y            (y) float64 4.551e+06 4.551e+06 ... 4.549e+06 4.549e+06
  * x            (x) float64 7.796e+05 7.796e+05 ... 7.812e+05 7.812e+05
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
    spatial_ref  int64 0
Attributes:
    scale_factor:  0.0001
    add_offset:    0.0
    _FillValue:    -9999

Above, we persisted the clipped results to memory using the persist() method. This doesn’t necessarily need to be done, but it will substantially improve the performance of the visualization of the time series below.
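For reference (an aside, not part of the original flow), compute() is the eager alternative to persist(): it returns a plain in-memory array rather than keeping the Dask structure with its chunks held in memory.

red_clip_eager = red_clip.compute()   # hypothetical alternative to persist()
print(type(red_clip_eager.data))      # numpy.ndarray -- no longer a Dask array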

Plot red_clip with hvplot

red_clip.hvplot.image(x='x', y='y', width=800, height=600, colorbar=True, cmap='Reds').opts(clim=(0.0, red_clip.values.max()))

Read in the NIR and Fmask VRT files

%%time
chunks=dict(band=1, x=1024, y=1024)
nir = rioxarray.open_rasterio('./data/nir_stack.vrt', chunks=chunks)                    # Read in VRT
nir = nir.rename({'band':'time'})                                                       # Rename the 'band' coordinate variable to 'time' 
nir['time'] = [datetime.strptime(x.split('.')[-5], '%Y%jT%H%M%S') for x in links_vsi]   # Extract the time information from the input file names and assign them to the time coordinate variable
nir = nir.sortby('time')                                                                # Sort by the time coordinate variable
nir
CPU times: user 69.9 ms, sys: 216 µs, total: 70.1 ms
Wall time: 81.9 ms
<xarray.DataArray (time: 19, y: 3660, x: 3660)>
dask.array<getitem, shape=(19, 3660, 3660), dtype=int16, chunksize=(1, 1024, 1024), chunktype=numpy.ndarray>
Coordinates:
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
  * x            (x) float64 7e+05 7e+05 7e+05 ... 8.097e+05 8.097e+05 8.097e+05
  * y            (y) float64 4.6e+06 4.6e+06 4.6e+06 ... 4.49e+06 4.49e+06
    spatial_ref  int64 0
Attributes:
    _FillValue:    -9999.0
    scale_factor:  0.0001
    add_offset:    0.0
%%time
chunks=dict(band=1, x=1024, y=1024)
fmask = rioxarray.open_rasterio('./data/fmask_stack.vrt', chunks=chunks)                    # Read in VRT
fmask = fmask.rename({'band':'time'})                                                       # Rename the 'band' coordinate variable to 'time' 
fmask['time'] = [datetime.strptime(x.split('.')[-5], '%Y%jT%H%M%S') for x in links_vsi]     # Extract the time information from the input file names and assign them to the time coordinate variable
fmask = fmask.sortby('time')                                                                # Sort by the time coordinate variable
fmask
CPU times: user 64.6 ms, sys: 85 µs, total: 64.7 ms
Wall time: 74.8 ms
<xarray.DataArray (time: 19, y: 3660, x: 3660)>
dask.array<getitem, shape=(19, 3660, 3660), dtype=uint8, chunksize=(1, 1024, 1024), chunktype=numpy.ndarray>
Coordinates:
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
  * x            (x) float64 7e+05 7e+05 7e+05 ... 8.097e+05 8.097e+05 8.097e+05
  * y            (y) float64 4.6e+06 4.6e+06 4.6e+06 ... 4.49e+06 4.49e+06
    spatial_ref  int64 0
Attributes:
    _FillValue:    255.0
    scale_factor:  1.0
    add_offset:    0.0

Create an xarray dataset

We will now combine the RED, NIR, and Fmask arrays into a dataset and create/add a new NDVI variable.

hls_ndvi = xr.Dataset({'red': red, 'nir': nir, 'fmask': fmask, 'ndvi': (nir - red) / (nir + red)})
hls_ndvi
<xarray.Dataset>
Dimensions:      (time: 19, x: 3660, y: 3660)
Coordinates:
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
  * x            (x) float64 7e+05 7e+05 7e+05 ... 8.097e+05 8.097e+05 8.097e+05
  * y            (y) float64 4.6e+06 4.6e+06 4.6e+06 ... 4.49e+06 4.49e+06
    spatial_ref  int64 0
Data variables:
    red          (time, y, x) int16 dask.array<chunksize=(1, 1024, 1024), meta=np.ndarray>
    nir          (time, y, x) int16 dask.array<chunksize=(1, 1024, 1024), meta=np.ndarray>
    fmask        (time, y, x) uint8 dask.array<chunksize=(1, 1024, 1024), meta=np.ndarray>
    ndvi         (time, y, x) float64 dask.array<chunksize=(1, 1024, 1024), meta=np.ndarray>

Above, we created a new NDVI variable. Now, we will clip and plot our results.

ndvi_clip = hls_ndvi.ndvi.rio.clip([fsUTM]).persist()
ndvi_clip
/srv/conda/envs/notebook/lib/python3.7/site-packages/dask/core.py:119: RuntimeWarning: divide by zero encountered in true_divide
  return func(*(_execute_task(a, cache) for a in args))
/srv/conda/envs/notebook/lib/python3.7/site-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in true_divide
  return func(*(_execute_task(a, cache) for a in args))
<xarray.DataArray 'ndvi' (time: 19, y: 56, x: 56)>
dask.array<getitem, shape=(19, 56, 56), dtype=float64, chunksize=(1, 56, 56), chunktype=numpy.ndarray>
Coordinates:
  * y            (y) float64 4.551e+06 4.551e+06 ... 4.549e+06 4.549e+06
  * x            (x) float64 7.796e+05 7.796e+05 ... 7.812e+05 7.812e+05
  * time         (time) datetime64[ns] 2021-05-13T17:24:06 ... 2021-08-17T17:...
    spatial_ref  int64 0

Plot NDVI

ndvi_clip.hvplot.image(x='x', y='y', groupby='time', width=800, height=600, colorbar=True, cmap='YlGn').opts(clim=(0.0, 1.0))

You may have noticed that the images for some of the time steps are ‘blurrier’ than others. This is because they are contaminated in some way, be it by clouds, cloud shadows, snow, or ice.

Apply quality filter

We want to keep NDVI values only where Fmask equals 0 (no clouds, no cloud shadow, no snow/ice, no water).

ndvi_clip_filter = hls_ndvi.ndvi.where(fmask==0, np.nan).rio.clip([fsUTM]).persist()
/srv/conda/envs/notebook/lib/python3.7/site-packages/dask/core.py:119: RuntimeWarning: divide by zero encountered in true_divide
  return func(*(_execute_task(a, cache) for a in args))
/srv/conda/envs/notebook/lib/python3.7/site-packages/dask/core.py:119: RuntimeWarning: invalid value encountered in true_divide
  return func(*(_execute_task(a, cache) for a in args))
ndvi_clip_filter.hvplot.image(x='x', y='y', groupby='time', width=800, height=600, colorbar=True, cmap='YlGn').opts(clim=(0.0, 1.0))
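A useful companion view (a sketch using the same hvplot interface, not part of the original notebook) is the field-averaged NDVI through time, which summarizes the quality-filtered stack as a single line:

# Average the quality-filtered NDVI over the field and plot it as a time series (NaNs are skipped by default).
ndvi_clip_filter.mean(dim=['x', 'y']).hvplot.line(x='time', ylabel='Mean NDVI', width=800, height=400)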

Aggregate by month

Finally, we will use xarray’s groupby operation to aggregate by month.

ndvi_clip_filter.groupby('time.month').mean('time').hvplot.image(x = 'x', y = 'y', crs = hls_proj, groupby='month', cmap='YlGn', width=800, height=600, colorbar=True).opts(clim=(0.0, 1.0))
rio_env.__exit__()

References

  • https://rasterio.readthedocs.io/en/latest/
  • https://corteva.github.io/rioxarray/stable/index.html
  • https://tutorial.dask.org/index.html
  • https://examples.dask.org/applications/satellite-imagery-geotiff.html